Vector Quantization

  Aniruddh Tyagi
     02-06-12
Voronoi Region
• Blocks:
   o   A sequence of audio samples.
   o   A block of image pixels.
       Formally, a vector; for example: (0.2, 0.3, 0.5, 0.1).
• A vector quantizer maps k-dimensional vectors in the vector
  space R^k into a finite set of vectors Y = {y_i : i = 1, 2, ..., N}. Each
  vector y_i is called a code vector or a codeword, and the set of all
  the codewords is called a codebook. Associated with each
  codeword, y_i, is a nearest-neighbor region called its Voronoi region,
  defined by:

      V_i = { x ∈ R^k : ‖x − y_i‖ ≤ ‖x − y_j‖ for all j ≠ i }
• The set of Voronoi regions partitions the entire space R^k.
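
A minimal sketch of this nearest-neighbor mapping, assuming NumPy (the
`quantize` name and the example codebook are illustrative, not part of
the original slides):

```python
import numpy as np

def quantize(x, codebook):
    """Return the index of the codeword whose Voronoi region contains x.

    x        : input vector of shape (k,)
    codebook : array of shape (N, k), one codeword y_i per row
    """
    # Squared Euclidean distance from x to every codeword.
    dists = np.sum((codebook - x) ** 2, axis=1)
    # x belongs to the Voronoi region of the nearest codeword.
    return int(np.argmin(dists))

codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(quantize(np.array([0.2, 0.3]), codebook))  # -> 0
```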
Two Dimensional Voronoi Diagram
[Figure: 2-D Voronoi diagram]
Codewords in 2-dimensional space. Input vectors are marked with an
x, codewords are marked with red circles, and the Voronoi regions are
separated with boundary lines.
The Schematic of a Vector
Quantizer
Compression Formula
• Amount of compression:
   o Codebook size is K, input vector of dimension L
   o In order to inform the decoder of which code vector is
     selected, we need to use ⌈log₂ K⌉ bits (see the sketch below).
        E.g. we need 8 bits to represent 256 code vectors.
   o Rate: each code vector contains the reconstruction values of L
     source output samples, so the number of bits per sample is
     R = ⌈log₂ K⌉ / L.
   o Sample: a scalar value in the vector.
   o K: the number of levels (the codebook size) of the vector quantizer.
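
A quick pure-Python check of these formulas, using the example numbers
above (the K and L values are just illustrative):

```python
from math import ceil, log2

K = 256  # codebook size (number of code vectors)
L = 4    # vector dimension (source samples per code vector)

bits_per_vector = ceil(log2(K))  # index bits sent to the decoder
rate = bits_per_vector / L       # bits per source sample

print(bits_per_vector, rate)     # 8 bits per vector -> 2.0 bits/sample
```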
VQ vs SQ
Advantage of VQ over SQ:
• For a given rate, VQ results in a lower distortion than
  SQ.
• If the source output is correlated, vectors of source
  output values will tend to fall in clusters.
   o   E.g. Example 9.3.1 in Sayood’s book.
• Even if there is no dependency, VQ offers greater flexibility.
   o   E.g. Example 9.3.2 in Sayood’s book.
Algorithms
• Lloyd algorithm: pdf-optimized
  quantizer; assumes the distribution is
  known.
• LBG (the extension to VQ):
  o   Continuous version (requires integral operations).
  o   Modified version: works with a training set.
LBG Algorithm
1. Determine the number of codewords, N, i.e. the size of the
   codebook.
2. Select N codewords at random, and let that be the initial
   codebook. The initial codewords can be randomly chosen from the set
   of input vectors.
3. Using the Euclidean distance measure, cluster the vectors
   around each codeword. This is done by taking each input vector and
   finding the Euclidean distance between it and each codeword. The
   input vector belongs to the cluster of the codeword that yields the
   minimum distance.
LBG Algorithm (contd.)
4. Compute the new set of codewords. This is done by obtaining the
   average of each cluster: add the components of the vectors in the
   cluster and divide by the number of vectors in the cluster,

       y_i = (1/m) · Σ_{j=1}^{m} x_ij

   where i indexes the components of each vector (the x, y, z, ...
   directions) and m is the number of vectors in the cluster.

5. Repeat steps 3 and 4 until either the codewords don't change or
   the change in the codewords is small (a code sketch follows below).
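
The five steps above translate directly into code. A compact sketch,
assuming NumPy and a training set of input vectors (the `lbg` name,
tolerance, and seeding are illustrative choices, not from the slides):

```python
import numpy as np

def lbg(training, N, tol=1e-6, seed=0):
    """Design an N-codeword codebook from training vectors of shape (num, k)."""
    rng = np.random.default_rng(seed)
    # Steps 1-2: choose N initial codewords at random from the input vectors.
    codebook = training[rng.choice(len(training), size=N, replace=False)].astype(float)
    prev_distortion = np.inf
    while True:
        # Step 3: distance from every training vector to every codeword,
        # then assign each vector to the cluster of its nearest codeword.
        dists = np.sum((training[:, None, :] - codebook[None, :, :]) ** 2, axis=2)
        labels = np.argmin(dists, axis=1)
        # Step 4: replace each codeword by the average of its cluster.
        for i in range(N):
            members = training[labels == i]
            if len(members):  # an empty cell keeps its old codeword here
                codebook[i] = members.mean(axis=0)
        # Step 5: stop when the distortion no longer changes appreciably.
        distortion = dists[np.arange(len(training)), labels].mean()
        if prev_distortion - distortion < tol:
            return codebook
        prev_distortion = distortion
```

Encoding then reduces to the nearest-codeword search shown in the
Voronoi-region sketch earlier.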
Other Algorithms
• Problem: LBG is a greedy algorithm and may fall into a
  local minimum.
• Four methods for selecting the initial vectors:
  o   Random
  o   Splitting (with a perturbation vector)
  o   Training with different subsets
  o   PNN (pairwise nearest neighbor)
• Empty cell problem:
  o   No input vectors correspond to an output vector.
  o   Solution: reassign that codeword using another cluster, e.g. by
      splitting the most populated cluster.
LBG for image compression
• Take N×M blocks of image pixels as vectors of dimension L = NM.
• If there are K vectors in the codebook:
   o we need to use ⌈log₂ K⌉ bits per block.
   o Rate: ⌈log₂ K⌉ / (NM) bits per pixel (see the sketch below).
• The higher the value of K, the better the quality, but the lower the
  compression ratio.
• Overhead to transmit the codebook: the K code vectors of L
  components each must also be sent, e.g. 8·K·L bits at 8 bits per
  component, amortized over the image.
• Train with a set of images.
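
A sketch of the blocking and bookkeeping above, assuming NumPy and
8-bit pixels (the function name and example sizes are illustrative):

```python
import numpy as np
from math import ceil, log2

def image_to_vectors(img, n, m):
    """Split an (H, W) image into non-overlapping n x m blocks,
    one vector of dimension L = n*m per block (edges are cropped)."""
    H, W = img.shape
    blocks = img[:H - H % n, :W - W % m].reshape(H // n, n, W // m, m)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, n * m)

img = np.arange(64).reshape(8, 8)
print(image_to_vectors(img, 4, 4).shape)  # (4, 16): four 16-d vectors

K, n, m = 256, 4, 4
rate = ceil(log2(K)) / (n * m)  # index bits per pixel
overhead_bits = K * n * m * 8   # transmitting the codebook itself
print(rate, overhead_bits)      # 0.5 bpp, plus 32768 bits of codebook
```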
Rate-Dimension Product
• Rate-dimension product
  o The size of the codebook increases exponentially with the rate.
  o Suppose we want to encode a source using R bits/sample. If
    we use an L-dimensional quantizer, we would group L samples
    together into vectors. This means that we would have RL bits
    available to represent each vector.
  o With RL bits, we can represent 2^(RL) output vectors.
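
For instance, at R = 2 bits/sample a 16-dimensional quantizer would
need 2^(RL) = 2^32 ≈ 4.3 × 10^9 codewords, which is why full-search
VQ is practical only for small rate-dimension products.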
Tree structured VQ
• Place the output points in separate quadrants; then only the signs of the
  input vector's components need to be compared first, reducing the
  number of comparisons by a factor of 2^L
  for the L-dimensional problem (a sketch follows below).
• This works well for symmetric distributions, but less well as we lose
  more and more symmetry.
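
A sketch of the sign test, assuming NumPy (the `quadrant_index` helper
is illustrative): the L component signs select one of the 2^L quadrants,
so only the codewords in that quadrant need a full distance comparison.

```python
import numpy as np

def quadrant_index(x):
    """Map the signs of x's L components to a quadrant number in [0, 2^L)."""
    bits = (np.asarray(x) >= 0).astype(int)  # one sign bit per component
    return int(bits @ (2 ** np.arange(len(bits))))

print(quadrant_index([0.7, -0.2]))  # quadrant 1 of the 4 quadrants in 2-D
```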
Tree Structured Vector Quantizer
•   Extend to non-symmetric case:
    o   Divide the set of output points into two groups, g0 and g1, and assign to
        each group a test vector s.t. output points in each group are closer to test
        vector assigned to that group than to the test vector assigned to the other
        group.
    o   Label the two test vectors 0 and 1.
    o   When we get an input vector, we compare it against the two test vectors.
        Depending on the outcome, the input is then compared only to the output
        points associated with the test vector closest to it.
    o   After these two comparisons, we can discard half of the output points.
    o   Comparison with the test vectors takes the place of looking at the signs of
        the components to decide which set of output points to discard from
        contention.
    o   If the total number of output points is K, we make (K/2) + 2 comparisons
        instead of K comparisons.
    o   We can continue to expand the number of groups. Finally: 2·log₂K
        comparisons instead of K (2 comparisons with the test vectors at each of
        the log₂K stages; a sketch follows below).
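
A sketch of this descent, assuming the tree is stored as nested dicts
with two test vectors per internal node and a codeword at each leaf
(the structure and names are illustrative, not from the slides):

```python
import numpy as np

def tsvq_encode(x, node, path=""):
    """Descend the binary tree: at each stage make 2 comparisons with the
    test vectors and recurse into the closer group; the accumulated bit
    path is the transmitted index."""
    if "codeword" in node:  # leaf: the surviving output point
        return path, node["codeword"]
    d0 = np.sum((x - node["test0"]) ** 2)
    d1 = np.sum((x - node["test1"]) ** 2)
    child, bit = (node["child0"], "0") if d0 <= d1 else (node["child1"], "1")
    return tsvq_encode(x, child, path + bit)

tree = {
    "test0": np.array([-1.0, 0.0]), "test1": np.array([1.0, 0.0]),
    "child0": {"codeword": np.array([-1.0, 0.0])},
    "child1": {"codeword": np.array([1.0, 0.0])},
}
print(tsvq_encode(np.array([0.8, 0.1]), tree))  # ('1', array([1., 0.]))
```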
Tree Structured VQ (continued)
• Since the test vectors are assigned to groups 0, 1,
  00, 01, 10, 11, 000, 001, 010, 011, 100, 101, 110, 111, etc.,
  which are the nodes of a binary tree, this VQ is called a
  “Tree-Structured VQ”.
• Penalty:
  o Possible increase in distortion: it is possible that at some
    stage the input is closer to one test vector while at the same
    time being closest to an output point belonging to the rejected
    group.
  o Increased storage: the output points of the VQ codebook plus the
    test vectors must be stored.
Additional Links
• Slides are adapted from:
http://www.data-compression.com
and
http://www.geocities.com/mohamedqasem/vectorquantization/vq.html
